Classic information extraction techniques consist of building questions and answers about facts. Indeed, it remains a challenge for subjective information extraction systems to identify opinions and feelings in context. In sentiment-based NLP tasks, there are few resources for information extraction, above all for offensive or hateful opinions in context. To fill this important gap, this short paper provides a new cross-lingual and contextual offensive lexicon, which consists of explicit and implicit offensive and swearing expressions annotated in two different classes: context-dependent and context-independent offensive. In addition, we provide markers to identify hate speech. The annotation approach was evaluated at the expression level and achieved high inter-annotator agreement. The offensive lexicon is provided in Portuguese and English.
Due to the severity of the social media offensive and hateful comments in Brazil, and the lack of research in Portuguese, this paper provides the first large-scale expert annotated corpus of Brazilian Instagram comments for hate speech and offensive language detection. The HateBR corpus was collected from the comment section of Brazilian politicians' accounts on Instagram and manually annotated by specialists, reaching a high inter-annotator agreement. The corpus consists of 7,000 documents annotated according to three different layers: a binary classification (offensive versus non-offensive comments), offensiveness-level classification (highly, moderately, and slightly offensive), and nine hate speech groups (xenophobia, racism, homophobia, sexism, religious intolerance, partyism, apology for the dictatorship, antisemitism, and fatphobia). We also implemented baseline experiments for offensive language and hate speech detection and compared them with a literature baseline. Results show that the baseline experiments on our corpus outperform the current state-of-the-art for the Portuguese language.
Since a lexicon-based approach is scientifically more elegant, as it explains the components of the solution and generalizes more easily to other applications, this paper provides a new approach for offensive language and hate speech detection on social media. Our approach embodies a lexicon of implicit and explicit offensive and swearing expressions annotated with contextual information. Due to the severity of abusive comments on social media in Brazil, and the lack of research in Portuguese, Brazilian Portuguese is the language used to validate the models. Nevertheless, our method may be applied to any other language. The conducted experiments show the effectiveness of the proposed approach, outperforming the current baseline methods for the Portuguese language.
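The lexicon-based detection described above can be sketched as a simple lookup with a context rule. The entries and the context heuristic below are invented for illustration; the paper's actual lexicon and contextual annotations are far richer.

```python
# Minimal sketch of lexicon-based offensive-comment flagging.
# Lexicon entries and the "targets a person" heuristic are illustrative,
# not the paper's actual resource.

OFFENSIVE_LEXICON = {
    # term -> whether its offensiveness depends on context
    "idiot": {"context_dependent": False},
    "trash": {"context_dependent": True},
}

def flag_comment(tokens):
    """Return lexicon terms flagged as offensive in a tokenized comment.
    Context-dependent terms are only flagged when the comment appears
    to target a person (a naive second-person heuristic)."""
    targets_person = "you" in tokens
    hits = []
    for tok in tokens:
        entry = OFFENSIVE_LEXICON.get(tok)
        if entry is None:
            continue
        if not entry["context_dependent"] or targets_person:
            hits.append(tok)
    return hits
```

The split between context-dependent and context-independent entries is what lets the same word pass in "take out the trash" but be flagged in "you are trash".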
We study inductive matrix completion (matrix completion with side information) under an i.i.d. subgaussian noise assumption at a low noise regime, with uniform sampling of the entries. We obtain for the first time generalization bounds with the following three properties: (1) they scale like the standard deviation of the noise and in particular approach zero in the exact recovery case; (2) even in the presence of noise, they converge to zero when the sample size approaches infinity; and (3) for a fixed dimension of the side information, they only have a logarithmic dependence on the size of the matrix. Differently from many works in approximate recovery, we present results both for bounded Lipschitz losses and for the absolute loss, with the latter relying on Talagrand-type inequalities. The proofs create a bridge between two approaches to the theoretical analysis of matrix completion, since they consist in a combination of techniques from both the exact recovery literature and the approximate recovery literature.
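The setting described above can be written compactly; the notation below is ours, not necessarily the paper's. With side-information matrices \(X \in \mathbb{R}^{n_1 \times d_1}\) and \(Y \in \mathbb{R}^{n_2 \times d_2}\), one observes, for uniformly sampled entries \((i, j)\),

```latex
M_{ij} \;=\; \bigl(X Z Y^{\top}\bigr)_{ij} \;+\; \sigma\,\xi_{ij},
```

where \(Z\) is the low-rank core matrix to be recovered, the \(\xi_{ij}\) are i.i.d. subgaussian, and \(\sigma\) sets the noise level. Property (1) of the bounds then means they scale with \(\sigma\), so they vanish in the exact recovery case \(\sigma \to 0\).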
This paper presents a corpus annotated for the task of direct-speech extraction in Croatian. The paper focuses on the annotation of the quotation, co-reference resolution, and sentiment annotation in SETimes news corpus in Croatian and on the analysis of its language-specific differences compared to English. From this, a list of the phenomena that require special attention when performing these annotations is derived. The generated corpus with quotation features annotations can be used for multiple tasks in the field of Natural Language Processing.
With the ever-growing popularity of the field of NLP, the demand for datasets in low-resource languages follows suit. Following a previously established framework, in this paper, we present the UNER dataset, a multilingual and hierarchical parallel corpus annotated for named entities. We describe in detail the procedure developed to create this type of dataset in any language available on Wikipedia with DBpedia information. The three-step procedure extracts entities from Wikipedia articles, links them to DBpedia, and maps the DBpedia sets of classes to the UNER labels. This is followed by a post-processing procedure that significantly increases the number of identified entities in the final results. The paper concludes with a statistical and qualitative analysis of the resulting dataset.
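The final mapping step of the three-step procedure can be sketched as a class-to-label lookup. The DBpedia class names and UNER labels below are illustrative examples, not the framework's actual mapping table.

```python
# Illustrative sketch of the last step of the pipeline: mapping
# DBpedia classes of already-extracted entities to UNER labels.
# Class names and labels here are invented examples.

DBPEDIA_TO_UNER = {
    "dbo:Person": "PER",
    "dbo:City": "LOC",
    "dbo:Company": "ORG",
}

def annotate(entities):
    """entities: (surface_form, dbpedia_class) pairs already extracted
    from Wikipedia links and resolved against DBpedia.
    Returns (surface_form, uner_label) pairs; unmapped classes are dropped,
    which is what the post-processing step later compensates for."""
    annotated = []
    for surface, dbo_class in entities:
        label = DBPEDIA_TO_UNER.get(dbo_class)
        if label is not None:
            annotated.append((surface, label))
    return annotated
```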
This article presents the application of the Universal Named Entity framework to generate automatically annotated corpora. By using a workflow that extracts Wikipedia data and meta-data and DBpedia information, we generated an English dataset which is described and evaluated. Furthermore, we conducted a set of experiments to improve the annotations in terms of precision, recall, and F1-measure. The final dataset is available and the established workflow can be applied to any language with existing Wikipedia and DBpedia. As part of future research, we intend to continue improving the annotation process and extend it to other languages.
Evaluating new techniques on realistic datasets plays a crucial role in the development of ML research and its broader adoption by practitioners. In recent years, there has been a significant increase in publicly available unstructured data resources for computer vision and NLP tasks. However, tabular data -- which is prevalent in many high-stakes domains -- has been lagging behind. To bridge this gap, we present Bank Account Fraud (BAF), the first publicly available privacy-preserving, large-scale, realistic suite of tabular datasets. The suite was generated by applying state-of-the-art tabular data generation techniques on an anonymized, real-world bank account opening fraud detection dataset. This setting carries a set of challenges that are commonplace in real-world applications, including temporal dynamics and significant class imbalance. Additionally, to allow practitioners to stress test both performance and fairness of ML methods, each dataset variant of BAF contains specific types of data bias. With this resource, we aim to provide the research community with a more realistic, complete, and robust test bed to evaluate novel and existing methods.
Biological and artificial agents need to deal with constant changes in the real world. We study this problem in four classical continuous control environments, augmented with morphological perturbations. Learning to locomote when the lengths and thicknesses of different body parts vary is challenging, as the control policy is required to adapt to the morphology in order to successfully balance and advance the agent. We show that a control policy based on the proprioceptive state performs poorly under highly variable body configurations, while an (oracle) agent with access to a learned encoding of the perturbation performs significantly better. We introduce DMAP, a biologically inspired, attention-based policy network architecture. DMAP combines independent proprioceptive processing, a distributed policy with individual controllers for each joint, and an attention mechanism that dynamically gates sensory information from different body parts to the different controllers. Despite not having access to the (hidden) morphology information, DMAP can be trained end-to-end in all the considered environments, overall matching or surpassing the performance of the oracle agent. Thus DMAP, implementing principles from biological motor control, provides a strong inductive bias for learning challenging sensorimotor tasks. Overall, our work corroborates the power of these principles in challenging locomotion tasks.
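The gating idea in the DMAP architecture described above can be sketched in a few lines: each joint controller blends proprioceptive channels from all body parts using attention weights. The scores and inputs below are toy values; in the real architecture both are produced by trained networks.

```python
import math

# Toy sketch of DMAP-style attention gating: one joint controller
# attends over proprioceptive channels from different body parts.
# Scores and sensory values here are illustrative placeholders.

def softmax(scores):
    """Numerically stable softmax over a list of attention scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def gated_input(proprio, attn_scores):
    """Blend per-body-part sensory channels for one controller,
    weighting each channel by its attention weight."""
    weights = softmax(attn_scores)
    return sum(w * x for w, x in zip(weights, proprio))
```

With equal scores the controller averages all body parts; learned scores let it dynamically emphasize the parts whose morphology currently matters.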
The abundant spatial and angular information in light fields has allowed the development of multiple disparity estimation approaches. However, acquiring a light field requires high storage and processing costs, limiting the use of this technology in practical applications. To overcome these drawbacks, compressive sensing (CS) theory has enabled the development of optical architectures that acquire a single coded light field measurement. This measurement is decoded using an optimization algorithm or a deep neural network, both of which require high computational costs. Traditional disparity estimation from compressed light fields requires first recovering the entire light field and then applying a post-processing step, resulting in long runtimes. In contrast, this work proposes fast disparity estimation from a single compressed measurement, omitting the recovery step required by traditional approaches. Specifically, we propose to jointly optimize an optical architecture for acquiring a single coded light field snapshot and a convolutional neural network (CNN) for estimating the disparity maps. Experimentally, the proposed method estimates disparity maps comparable to those obtained from light fields reconstructed with deep learning approaches. Furthermore, the proposed method is 20 times faster in training and inference than the best method that estimates disparity from reconstructed light fields.
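The single coded snapshot at the heart of the approach above can be sketched as a coded sum over the light field's angular views. The views and code values below are toy data; in the paper the coding elements are optimized jointly with the disparity-estimating CNN rather than fixed by hand.

```python
# Toy sketch of single-snapshot coded acquisition: the compressed
# measurement is a weighted (coded) sum of the angular views of a
# light field. Views and code values here are illustrative only.

def coded_snapshot(views, code):
    """views: list of angular views, each a flat list of pixel values
    (all views share the same spatial size).
    code: one multiplicative coding weight per view (learned in practice).
    Returns a single measurement image of the same pixel size."""
    n_pix = len(views[0])
    snapshot = [0.0] * n_pix
    for view, c in zip(views, code):
        for p in range(n_pix):
            snapshot[p] += c * view[p]
    return snapshot
```

The disparity network then consumes this single measurement directly, which is what removes the expensive light-field recovery step.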